At its core, the NLB framework uses THREE.js to store and render all model geometry to the main WebGL model canvas. However, the internal structures of THREE objects and meshes are very closely aligned to the requirements of graphics hardware for rendering complex 3D geometry as fast as possible. They are not intended as a data format for effectively or efficiently capturing detailed building or construction information.
Moreover, THREE.BufferGeometry objects (on which virtually everything relies and which provide the fundamental conduit between CPU and GPU) cannot be easily and dynamically resized, so must be carefully managed to maintain optimum rendering speeds and avoid excessive garbage collection when their content changes quite significantly during interactive editing.
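To make that management concrete, the usual plain THREE.js approach (shown here for illustration only; this is not NLB framework code) is to preallocate a generously sized attribute buffer, refill it in place and limit what is drawn with setDrawRange(), rather than ever resizing the buffer itself:

// Illustration of buffer reuse in plain THREE.js - not framework code.
import * as THREE from 'three';

const MAX_VERTICES = 65536;                       // Assumed capacity.
const geometry = new THREE.BufferGeometry();
const positions = new THREE.BufferAttribute(new Float32Array(MAX_VERTICES * 3), 3);
positions.setUsage(THREE.DynamicDrawUsage);       // Hint that the data changes often.
geometry.setAttribute('position', positions);

function refillPositions(points) {                // points: array of THREE.Vector3
  // (Assumes points.length <= MAX_VERTICES.)
  for (let i = 0; i < points.length; i++) {
    positions.setXYZ(i, points[i].x, points[i].y, points[i].z);
  }
  positions.needsUpdate = true;                   // Re-upload the buffer to the GPU.
  geometry.setDrawRange(0, points.length);        // Only draw what was written.
  geometry.computeBoundingSphere();               // Keep frustum culling correct.
}

Growing beyond the preallocated capacity means creating and uploading a brand new attribute, which is exactly the kind of churn the framework's mesh management is designed to avoid.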
As a Building Information Model (BIM) may comprise many tens of thousands of different elements, it is simply not practical for each individual element to maintain its own set of THREE meshes for all the different materials and geometry it may be composed of. Doing that would certainly be possible, and in some ways may even be preferable for small models, but the sheer number of draw calls required in each render would not scale for larger commercial or institutional buildings, or as the model develops to include detailed construction and service infrastructure elements.
Rather, it is necessary to store the geometry of individual elements in some other way, and then quickly (re)generate THREE.Mesh objects assembled from multiple elements whenever any of their constituent geometry changes. This document outlines how this is done within the NLB framework.
Additional Constraints
The need to support THREE rendering is one thing, but another core aim of the NLB framework is to provide performance analysis and simulation feedback on the building model. To support this, the internal structure of stored geometry needs to be compatible with and/or convertible to and from all sorts of analytical models and other BIM formats. Whilst some model formats are pretty flexible and easily supported, others are much less so.
The two external formats that have most influenced how geometry is generated and stored in the framework are the IFC Standard and gbXML. Whilst the framework as a whole is capable of much more than is offered by these formats, many parts of its internal structure have been somewhat shaped and constrained by the need to directly support them. This has meant being two-way compatible with the gbXML ShellGeometry and ClosedShell components, as well as the IFC IfcSolidModel and IfcTessellatedFaceSet component hierarchies.
BIM.Level
The BIM.Level object is the primary place where THREE meshes are stored and managed. Levels belong to BIM.Structure, BIM.Building and BIM.Site objects, and are essentially a collection of spaces and/or geometry that sit at the same or very similar floor heights above the site datum. When part of a BIM.Building, they are effectively a storey (English) or story (American). This is also true when they are part of a BIM.Structure, but when that structure is a railway bridge or gas storage tank, the concept of a storey/story becomes less applicable. When they are part of a BIM.Site, several different levels may share the same height so they become more akin to layers.
Having each level store its own render meshes allows them to be quickly and independently turned on/off, animated up/down or faded in/out using mesh properties and/or transforms, which is far more efficient than having to animate each element's geometry and/or display properties. Each level therefore has a meshes property that stores a BIM.Meshes object, which is a subclass of THREE.Group and stores three core meshes and a number of on-demand meshes. The core meshes are used for rendering surface geometry, transparent glazing, edge outlines, thick and dotted line work as well as text and other annotations. The on-demand meshes are used for showing things that have very specific material requirements, such as analysis grid data, site works, vegetation and image overlays when they are present on a level.
Thus, one of the key roles of a level is to manage the rebuilding and updating of each of its meshes to ensure that the rendered model responds appropriately to changes in individual elements. Doing this quickly and effectively is critical to the dynamic response of the model as the user interactively selects and drags geometry around or changes element parameters. This is typically managed by the host application using the various registered PD.UserModeHandler instances, which are responsible for calling the rebuild() method on each affected level when required.
Building Level Geometry
Another key role of a level is to assemble the external geometry of all the spaces, walls, columns and other building elements it contains into one or more boundary footprints and envelopes. This is a relatively complex process, but required in order to determine which parts of each wall, slab, ceiling or roof are adjacent to other spaces (and therefore use their internal material) or exposed to the outside (using their external material). As this determines the thickness of each element, it also affects the position and nature of windows, doors and other apertures, which can only be resolved once this process has taken place.
The rebuild process involves iterating through all of the building elements that belong to that level to generate their inner and outer footprints, resolving the junctions between them, and creating an outer boundary from which the external wall surfaces of the building envelope can be generated. The next step is to iterate through all of the apertures within those elements to add the appropriate holes to the external walls of the building envelope, the internal walls of adjacent spaces, or the ceilings and floors of the levels immediately above or below.
The level also needs to manage those elements that do not affect the building form - such as analysis grids, trees, rocks and other landscaping elements - so that it can avoid rebuilding the entire level unnecessarily when only these are changing.
Updating level meshes efficiently requires elements to support a three-step build and render process (a simplified sketch follows the list below):
- Step 1 - Build Pass: This step provides the opportunity for each element to rebuild its own stored geometry if it has changed in any way since the last level update. Each visible element's update() method is called to check if it has been edited and allow it to regenerate its own internal geometry. The level then uses each element's stored geometry to build its outer boundary footprint, extrude external wall surfaces and update its graph of element path(s).
- Step 2 - Clip Pass: This step allows apertures to find and insert holes into adjacent surfaces, and for external elements to clip themselves to the level outer boundary if they need to. The level is smart enough to have already assembled a list of those particular elements that need to do this during the first step, so it is not a full model iteration.
- Step 3 - Render Pass: This final step allows each element to add its stored geometry to the various level meshes to be rendered. The level first resets all its meshes ready for reuse, and then calls each visible element's render() method. The level then adds its own geometry to the meshes and updates them before requesting a scene redraw.
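As a much-simplified sketch of how such a rebuild might be orchestrated: the visibleElements(), needsClipping, buildOuterBoundary(), clipToBoundary(), renderOwnGeometry() and requestRedraw() names, and the aggregate meshes.reset()/update() calls, are illustrative assumptions, while the element update() and render() calls follow the description above.

// Illustrative sketch only - the real BIM.Level rebuild is considerably more involved.
rebuild() {
  const clipList = [];

  // Step 1 - Build pass: let each visible element refresh its own stored geometry.
  for (const element of this.visibleElements()) {
    element.update();                 // Rebuilds internal geometry if it has been edited.
    if (element.needsClipping) {      // Assumed flag: apertures and external elements.
      clipList.push(element);
    }
  }
  this.buildOuterBoundary();          // Assemble footprints and the building envelope.

  // Step 2 - Clip pass: only the elements collected above take part.
  for (const element of clipList) {
    element.clipToBoundary(this);
  }

  // Step 3 - Render pass: reset meshes, let elements add their geometry, then update.
  this.meshes.reset();
  for (const element of this.visibleElements()) {
    element.render(this);
  }
  this.renderOwnGeometry();           // The level's own geometry (boundary, annotations).
  this.meshes.update();
  this.requestRedraw();
}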
BIM.Element
To support the updating of level render meshes, each BIM.Element object implements three core lifecycle methods (a skeletal example follows the list below):
- update(): In this method, the element checks to see if it or any of its components need to be rebuilt. If so, it calls its own rebuild method and sets or resets all the update indicator flags (such as hasChanged and wasRebuilt).
- rebuild(): This method does the actual rebuilding of the element's geometry. For most elements this means rebuilding their own shell or BRep(s), as well as any shells or surfaces on the junctions within their path. For those elements that manage their own render mesh(es), it may also mean rebuilding their mesh or doing some preparatory calculations so that it can be done quickly within the render method.
- render(): This method is where the element's stored geometry is added to the appropriate level render meshes. Most geometry objects, such as Shells, BReps, Paths, Polylines and Polygons, have a range of copyToPolyMesh() methods that do this quickly and simply. The role of this render method is to work out exactly what should be rendered, and how, within different views. For example, tree, space and retaining wall elements may render themselves quite differently in 3D than they do in a 2D plan section view.
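Putting these together, a skeletal custom element might look something like the sketch below. The hasChanged and wasRebuilt flags, reuseStart()/reuseEnd() and copyToPolyMesh() come from this document; buildShellFromPath(), the render(level) signature and the level.meshes.surfaces name are hypothetical.

// Skeletal sketch of a custom BIM.Element subclass - see assumptions above.
class MyElement extends BIM.Element {
  update() {
    if (this.hasChanged) {            // Set whenever parameters or the path change.
      this.rebuild();
      this.hasChanged = false;
      this.wasRebuilt = true;
    }
    return this;
  }

  rebuild() {
    this.shell.reuseStart();          // Reuse existing shell vertices and polygons.
    this.buildShellFromPath();        // Hypothetical: extrude geometry from this.path.
    this.shell.reuseEnd();
    return this;
  }

  render(level) {
    // Copy the stored shell geometry into the level's shared render meshes.
    this.shell.copyToPolyMesh(level.meshes.surfaces);
    return this;
  }
}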
Levels call the update() and render() methods on each of their elements as required and at different times during the rebuild process.
Element Types
Many types of element in the framework are generated parametrically based on a path which they follow. This works well for most elements as it allows them to be easily edited, aligned with and snapped to other paths, and to embody some of the design intent behind them (explained later). However, not all elements in a building model are parametric; some need a fixed shape that does not follow any path. For example, some manufactured objects only come in very specific shapes, and an engineered element may have to remain exactly as the engineer designed it. As a result, there are several different core element classes.
- BIM.Element: This is the base for all other element types and provides options to use a path, shell, BRep or any combination as the basis for the element's stored geometry. Base elements are typically parametric, generating their geometry dynamically from their path as it is interactively edited by the user.
- BIM.Element.External: External elements typically live outside the building envelope, and this class provides the mechanisms for trimming the element's geometry to its level's building boundary once it has been generated.
- BIM.Element.Composite: Composite elements are essentially an assembly of child elements that can be edited either independently or as a group. This class provides the mechanisms for adding/removing child elements as well as selecting and editing them.
- BIM.Element.Engineered: Engineered elements are typically one-off designs provided as-is by a consultant, or types of fixtures, fittings or equipment that only come in one size and shape. They have fixed geometry that is either imported or provided by a manufacturer-supplied plugin, and stored as a Shell or BRep, or both. Such elements can be moved around, scaled and rotated, but their core shape is either static or not dynamically editable.
- BIM.Element.THREE: THREE-based elements are used to store imported geometry. The geometry file is imported using one of the THREE.Loader subclasses and converted to a hierarchy of THREE objects. This hierarchy is simply added to the host level's BIM.Meshes instance. Like others, such elements can be moved around, scaled and rotated, but their core shape is static and not editable.
PD.Path
All BIM elements have the option to store a BIM.Path object that positions them within the model and defines a route they can follow. Paths are basically a PD.Polyline where each point is a BIM.Junction object rather than a PD.Point or THREE.Vector3. As such, they can be an open line or a closed loop. When closed, the isPolygon flag is set so that they represent a single shape, with the first contour defining its outer boundary and any subsequent contours defining holes or voids within it.
Paths are a key part of the BIM process as they essentially capture the design intent behind an element. Rather than first drawing a floor slab and then drawing a separate line for a wall that is inset by half the wall thickness, both the slab and the wall can follow exactly the same path, with the outer edge of the slab automatically extended by the right amount to accommodate the wall above it. Because they follow the same path, changing one of the junctions updates both the slab and the wall. Moreover, other elements can snap to that path and know which other elements follow it so that they can adjust themselves to make the right connections. Rather than having to stop an element slightly short of another to accommodate its size, the aim of using paths is to allow you to snap the path of one element to the path of another and either have their inter-connection automatically worked out or manually adjust or offset the geometry relative to the snapped junction to achieve the desired result.
Thus, think of paths as defining the base skeleton of a model, with its more detailed geometry being generated from all the inter-connected bones and joints. Elements that follow those paths can automatically (re)generate their stored shell, BRep or mesh geometry using their own specific dynamic parameters as well as the junctions along the path and any relevant information they can determine from other elements that share or intersect that path.
Again, whilst this applies to most elements, not all elements are path-based or parametric.
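Conceptually, the shared-path workflow looks something like the following sketch. The element class names, constructor signatures, the close() call and the junctions property are assumptions for illustration, not the framework's actual API.

// Two elements following the same path - editing one junction updates both.
const path = new BIM.Path([
  new BIM.Junction(0, 0, 0),
  new BIM.Junction(12, 0, 0),
  new BIM.Junction(12, 8, 0),
  new BIM.Junction(0, 8, 0),
]);
path.close();                               // Assumed: closes the loop and sets isPolygon.

const slab = new BIM.FloorSlab({ path });   // Hypothetical element types sharing
const wall = new BIM.Wall({ path });        // exactly the same path object.

path.junctions[1].set(14, 0, 0);            // Move one corner of the shared path...
slab.update();                              // ...and both elements regenerate from it.
wall.update();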
PD.Shell
The PD.Shell class stores element geometry as a series of connected planar PD.Polygon surfaces and, as such, is very similar to the gbXML ShellGeometry and ClosedShell components. Thus, shells are used mainly by spaces and other elements that form part of the building structure or its envelope, and are used to generate the spatial adjacency information required by energy and thermal analytical models.
Shells are typically generated directly from an element's path, either by extrusion or some other generative process. The individual facets of the shell are stored as PD.Polygon objects, which each have their own normal, plane equation, surface triangulation, and optional surface color or material reference. This allows a single shell to store the geometry of an element that is constructed from surfaces with different materials.
This is important as shells also play a key role in the interactive selection and dragging of elements within the model. Whilst the element's path is selectable and editable, its shell is also used for selection ray intersection when determining which part of the model the user clicked on. Thus, whilst storing each facet as a polygon adds some amount of overhead, it does make this process much faster and simpler as the triangles that make up each face all share the same plane equation, which is precalculated and stored for each polygon. This also makes the process of manually selecting/editing parts of the shell itself, and ray-tracing it for analysis purposes, much more practical and efficient.
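To see why, consider a generic ray-plane test (plain JavaScript, not framework code): with the plane equation n·p + d = 0 already stored for each polygon, a selection ray only needs one such test per polygon before any 2D boundary or per-triangle checks are made.

// Intersect a ray with a polygon's stored plane (normal n and constant d).
function intersectRayWithPlane(origin, direction, normal, d) {
  const denom = normal.dot(direction);
  if (Math.abs(denom) < 1e-9) return null;              // Ray is parallel to the plane.
  const t = -(normal.dot(origin) + d) / denom;
  if (t < 0) return null;                               // Plane lies behind the ray.
  return origin.clone().addScaledVector(direction, t);  // Hit point on the plane.
}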
Another key point is that shells are designed to be reusable. When dynamically editing a model, the shells of all the walls, spaces and apertures affected by that edit may need to be regenerated entirely up to 60 times a second to provide the required visual feedback. If they were not reusable, the resulting garbage collection would make that process anything but smooth and dynamic. Thus, it is absolutely critical that geometry only be added to a shell inside your class's rebuild() method or between calls to its own reuseStart() and reuseEnd() methods. If not, you will end up forever increasing the size of your shell with replicated geometry until memory limits are exceeded.
The default rebuild() method of an element automatically calls reuseStart() at the beginning and reuseEnd() at the end, so you do not have to worry about doing this when creating custom BIM.Component subclasses. However, if you are creating your own BIM.Element subclass that uses a shell and overrides the rebuild() method, then you should use the following code as a basis and for reference.
// The default `rebuild()` method.
rebuild() {
  // Check shell.
  this.ensureValidShell();
  this.shell.reuseStart();
  if (this.typeComponent && this.path && this.path.hasContent()) {
    // Build shell geometry.
    this.typeComponent.rebuild(this);
    // Link shell to this path.
    this.shell.linkedToPath = this.path;
  }
  this.shell.reuseEnd();
  return this;
}
PD.BRep
Most core building elements such as walls, floors, ceilings, roofs and apertures tend to have sharp corners and flat surfaces in at least two of their three dimensions, so are well suited to being represented by a faceted shell. However, undulating terrain, plants with thousands of individual leaves or shapes with spherical, cylindrical or other curved surfaces are not. These types of elements have the option of using a PD.BRep boundary representation to store their geometry.
BReps offer a lighter-weight alternative when storing large numbers of polygonal facets or highly triangulated surfaces, and are designed to handle both curved and flat faces equally well. Their only downside is that it is simply not practical to precalculate and store the plane equation of every individual triangle within each curved face, of which there may often be several thousand. This makes the interactive selection and ray-tracing of surfaces with highly curved faces less efficient and a bit slower than using a faceted shell.
To get around this problem, several elements use both a BRep and a shell to store their geometry. For example, many BIM.Tree elements store their detailed trunk and canopy geometry as a BRep and a much simpler convex hull shape as their shell geometry. Similarly, railings and fences use BReps to store the detailed post, rail and infill geometry, and a shell to store single faces representing their span over each path segment. This makes interactive selection, editing and highlighting much simpler and faster, whilst still allowing for very complex geometrical representations.
When performing more detailed analytical ray-tracing, the framework maintains an optimised octree of triangles from all visible geometry, so there is very little difference between using a shell or a boundary representation. Also, BReps are used as the core of all constructive solid geometry (CSG) operations within the framework.
Like shells, BReps are designed to be reusable and have the same reuseStart() and reuseEnd() methods as a shell, and are used in the same way. To reuse vertices and faces, it is critical that geometry only be added between calls to these methods. If not, you run the risk of forever increasing the size of the shape with replicated geometry until memory limits are exceeded.
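In outline, an overridden rebuild() that maintains a BRep follows the same bracketed pattern as the shell example above; the brep property name and the geometry helpers here are placeholders.

// Sketch of BRep reuse inside an overridden rebuild() - helper names are placeholders.
rebuild() {
  this.brep.reuseStart();             // Existing vertices and faces become reusable.
  this.addTrunkGeometry(this.brep);   // Hypothetical detailed geometry builders.
  this.addCanopyGeometry(this.brep);
  this.brep.reuseEnd();
  return this;
}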
PD.PolyMesh
A PD.PolyMesh is a custom subclass of THREE.Mesh that is designed to be reusable, to share its vertex attribute buffers between both surface and line geometry, and to allow its buffers to be dynamically resized.
When rendering technical drawings of CAD and BIM geometry, it is desirable to render both the surfaces of that geometry and the outline of hard edges between surfaces. In THREE.js, mesh objects can only be rendered as TRIANGLES, LINES or POINTS. As lines defining the boundary of a surface are typically different to its surface triangulation, to render both the surfaces (TRIANGLES) and edge outlines (LINES) of a surface would normally require maintaining two separate meshes. However, the edge outlines typically connect between exactly the same vertices as the triangulation, just in a different sequence, so it makes sense in this framework to share the same vertices between the two renderings.
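The underlying sharing is standard THREE.js: two BufferGeometry objects can reference the same position attribute while using different index buffers, one drawn as triangles and the other as line segments. A minimal plain-THREE.js illustration (not the PD.PolyMesh implementation itself):

// Share one vertex buffer between a triangle mesh and its edge outlines.
import * as THREE from 'three';

// Four vertices of a unit quad, shared by both renderings.
const positions = new THREE.BufferAttribute(new Float32Array([
  0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0,
]), 3);

const surfaceGeometry = new THREE.BufferGeometry();
surfaceGeometry.setAttribute('position', positions);
surfaceGeometry.setIndex([0, 1, 2, 0, 2, 3]);             // Two TRIANGLES.

const outlineGeometry = new THREE.BufferGeometry();
outlineGeometry.setAttribute('position', positions);      // Same attribute instance.
outlineGeometry.setIndex([0, 1, 1, 2, 2, 3, 3, 0]);       // Four boundary LINES.

const surfaces = new THREE.Mesh(surfaceGeometry, new THREE.MeshBasicMaterial());
const outlines = new THREE.LineSegments(outlineGeometry, new THREE.LineBasicMaterial());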
Just like the PD.Shell and PD.BRep classes, meshes need to be reusable and able to handle being recreated up to 60 times a second without triggering garbage collection when the user is dynamically editing a model. Thus, it is absolutely critical that geometry only be added to a mesh between calls to its reset() and update() methods. If not, you will end up forever increasing the size of its buffers with replicated geometry until GPU memory limits are exceeded.
This is handled automatically by the BIM.Level before and after it calls any render() methods on its elements, so you will typically never have to worry about this. However, if you create a custom element that uses its own render mesh(es), then you will need to ensure that you call reset() before you start adding mesh geometry in your overridden rebuild() or render() methods, and then update() when you are done and before you hand back control to the level.
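For reference, a custom render() that manages its own mesh might be bracketed like this; the gridMesh and gridPolygons property names are hypothetical.

// Sketch of a custom render() using its own PD.PolyMesh.
render(level) {
  this.gridMesh.reset();                  // Rewind the mesh buffers for reuse.
  for (const polygon of this.gridPolygons) {
    polygon.copyToPolyMesh(this.gridMesh);  // Add this frame's geometry.
  }
  this.gridMesh.update();                 // Finalise before returning control to the level.
  return this;
}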
PD.Shape
The PD.Shape geometry class is used much less often, but is still worth knowing about. It differs from the other geometry classes in that all its vertex, normal, color and texture coordinate data are stored as vector arrays rather than vector objects. The PD.Path, PD.Polyline, PD.Polygon, PD.Shell and PD.BRep classes all build and store geometry as THREE.Vector3 objects, or one of its subclasses (such as PD.Point, PD.PathPoint or BIM.Junction). However, there are several external libraries used for shape generation that use [x,y] or [x,y,z] vector arrays instead of {x,y,z} point objects. Also, GPU vertex attribute buffers store their data as long flat arrays of numbers, which makes the conversion from vector arrays fast and trivial using the Array.flat() method.
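For example, turning such vector arrays into a flat GPU attribute is a one-liner in plain JavaScript/THREE.js:

// Converting [x,y,z] vector arrays into a flat vertex attribute buffer.
import * as THREE from 'three';

const vertices = [
  [0, 0, 0],
  [1, 0, 0],
  [0.5, 1, 0],
];

// Array.flat() collapses the nested arrays into one long run of numbers.
const positions = new THREE.BufferAttribute(new Float32Array(vertices.flat()), 3);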
The PD.Shape class is therefore typically used as an intermediary step in the creation of geometry that relies on one or more external libraries that uses vector arrays. Also, there are certain types of vector-array-based mathematical modifications that can be applied to faceted manifold shapes that can open up a whole range of additional geometrical design opportunities (see the Polyhedra Generator application).
When used together with the PD.VectorArray, PD.PlaneArray, PD.MatrixArray and PD.QuaternionArray classes, polyhedron geometry offers a lot of potentially interesting opportunities for shape experimentation.
Points and Positions
Points are used to define a particular spatial position within the model. As a result, all points and point types contain x, y and z properties that store the position of the point relative to the global origin (0,0,0) in each of the three Cartesian axes. This framework uses THREE.js, so all points and point types derive from THREE.Vector3. The one exception to this is the PD.VectorArray, which stores point data as an [x,y,z] vector array and is provided to make working with some external libraries much easier.
In order to optimise both performance and memory usage, the framework uses a hierarchy of point classes that each contain additional properties. The following is an outline of each class and its role within the framework.
THREE.Vector3
This is the base class of all points. It provides the core x, y and z properties, as well as many of the basic vector math methods. Instances are used frequently by the framework as temporary variables within geometry creation methods, both to receive position-based results and to pass as arguments to the basic vector math methods.
PD.Point
This is the core point type used by classes within the framework. Points are used as vertices in all PD.Shell, PD.BRep and PD.Curve geometry, and add an attribute map and a shell index property to the object.
As not all shells, surfaces and curves require anything more than just position information, the majority of PD.Point instances will simply store an empty attribute map and never access it. For any shells, surfaces and curves that do require additional information, the attribute map provides a means of storing that data in a way that allows points to still be cached and reused very efficiently without affecting their object shape within the code optimiser.
For commonly used attributes (such as surface normals, tangents, binormals and color), the class provides getter/setter methods that directly access the attribute map, creating and disposing entries as required. It also provides getter/setter methods for storing the current position of the point, and methods to support the interactive movement and transformation of points relative to that reference position. This is used extensively by the model editing classes.
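A stripped-down sketch of this attribute-map pattern is shown below; it is illustrative only, and the real PD.Point has considerably more to it (reference positions, shell indexing and so on).

// Illustrative sketch of the PD.Point attribute-map idea.
import * as THREE from 'three';

class Point extends THREE.Vector3 {
  constructor(x, y, z) {
    super(x, y, z);
    this.attributes = new Map();  // Always present (usually empty) so that every
    this.shellIndex = -1;         // instance keeps the same shape for the optimiser.
  }

  get normal() {
    return this.attributes.get('normal');
  }

  set normal(value) {
    if (value) this.attributes.set('normal', value);
    else this.attributes.delete('normal');
  }
}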
PD.PathPoint
PD.PathPoint objects are used in higher-level PD.Path instances, which define how a series of points are connected together by lines or curves. They are used extensively within the framework for the generation of model geometry - and frequently need to be offset, extruded, lofted, intersected, cut, clipped, joined, and all manner of different transformations applied.
To reduce the number of trigonometric computations required in many of these processes, path points add additional path-related properties such as their incoming/outgoing vectors and distances, incoming/outgoing segment normals, the angle they make between the previous and next points, and whether that angle is convex or concave relative to the path as a whole. These values are updated whenever any part of the path changes.
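As a worked illustration, the kind of values cached for a point on a path in the XY plane could be derived from its neighbours as follows; the property names are assumptions based on the description above.

// Recompute cached path-related values for a point between its two neighbours.
import * as THREE from 'three';

function updatePathPoint(prev, point, next) {
  const incoming = new THREE.Vector3().subVectors(point, prev);
  const outgoing = new THREE.Vector3().subVectors(next, point);

  point.incomingDistance = incoming.length();   // Cached segment lengths.
  point.outgoingDistance = outgoing.length();

  incoming.normalize();
  outgoing.normalize();
  point.angle = incoming.angleTo(outgoing);     // Turn angle at this point.

  // The sign of the 2D cross product distinguishes convex from concave corners
  // (assuming the path as a whole runs counter-clockwise).
  const cross = incoming.x * outgoing.y - incoming.y * outgoing.x;
  point.isConvex = cross >= 0;
}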
BIM.Junction
BIM.Junction objects are PD.PathPoint objects with additional information such as construction type or material properties (such as thickness or surface coverings), as well as arrays for BIM.Aperture components and PD.Relationship instances. Junctions may also have their own PD.Shell, which some elements use to store additional display geometry and others use for selection and/or for constraining first-person navigation.